Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report
Shanghai AI Lab: Xiaoyang Chen, Yunhao Chen, Zeren Chen, Zhiyun Chen, Hanyun Cui, Yawen Duan, Jiaxuan Guo, Qi Guo, Xuhao Hu, Hong Huang, Lige Huang, Chunxiao Li, Juncheng Li, Qihao Lin, Dongrui Liu, Xinmin Liu, Zicheng Liu, Chaochao Lu, Xiaoya Lu, Jingjing Qu, Qibing Ren, Jing Shao, Jingwei Shi, Jingwei Sun, Peng Wang, Weibing Wang, Jia Xu, Lewen Yan, Xiao Yu, Yi Yu, Boxuan Zhang, Jie Zhang, Weichen Zhang, Zhijie Zheng, Tianyi Zhou, Bowen Zhou
To understand and identify the unprecedented risks posed by rapidly advancing artificial intelligence (AI) models, this report presents a comprehensive assessment of their frontier risks. Drawing on the E-T-C analysis (deployment environment, threat source, enabling capability) from the Frontier AI Risk Management Framework (v1.0) (SafeWork-F1-Framework), we identify critical risks in seven areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion. Guided by the "AI-45° Law," we evaluate these risks using "red lines" (intolerable thresholds) and "yellow lines" (early warning indicators) to define risk zones: green (manageable risk for routine deployment and continuous monitoring), yellow (requiring strengthened mitigations and controlled deployment), and red (necessitating suspension of development and/or deployment). Experimental results show that all recent frontier AI models reside in green and yellow zones, without crossing red lines. Specifically, no evaluated models cross the yellow line for cyber offense or uncontrolled AI R&D risks. For self-replication, and strategic deception and scheming, most models remain in the green zone, except for certain reasoning models in the yellow zone. In persuasion and manipulation, most models are in the yellow zone due to their effective influence on humans. For biological and chemical risks, we are unable to rule out the possibility of most models residing in the yellow zone, although detailed threat modeling and in-depth assessment are required to make further claims. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
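The red-line/yellow-line zoning described in the abstract can be sketched as a simple threshold mapping. The numeric thresholds and the example score below are hypothetical placeholders, not values from the framework itself:

```python
from dataclasses import dataclass

@dataclass
class RiskLines:
    yellow: float  # early-warning indicator
    red: float     # intolerable threshold

def risk_zone(score: float, lines: RiskLines) -> str:
    """Map a measured capability score to a deployment zone."""
    if score >= lines.red:
        return "red"     # suspend development and/or deployment
    if score >= lines.yellow:
        return "yellow"  # strengthened mitigations, controlled deployment
    return "green"       # routine deployment, continuous monitoring

# Example: a model's cyber-offense score against hypothetical lines
lines = RiskLines(yellow=0.5, red=0.9)
print(risk_zone(0.62, lines))  # -> yellow
```

In practice each of the seven risk areas would carry its own pair of lines, and the per-area zone assignments would be aggregated into the model-level verdict the report describes.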
Robustness Requirement Coverage using a Situation Coverage Approach for Vision-based AI Systems
Shahbeigi, Sepeedeh, Proma, Nawshin Mannan, Hodge, Victoria, Hawkins, Richard, Li, Boda, Donzella, Valentina
AI-based robots and vehicles are expected to operate safely in complex and dynamic environments, even in the presence of component degradation. In such systems, perception relies on sensors such as cameras to capture environmental data, which is then processed by AI models to support decision-making. However, degradation in sensor performance directly impacts input data quality and can impair AI inference. Specifying safety requirements for all possible sensor degradation scenarios leads to unmanageable complexity and inevitable gaps. In this position paper, we present a novel framework that integrates camera noise factor identification with situation coverage analysis to systematically elicit robustness-related safety requirements for AI-based perception systems. We focus specifically on camera degradation in the automotive domain. Building on an existing framework for identifying degradation modes, we propose involving domain, sensor, and safety experts, and incorporating Operational Design Domain specifications to extend the degradation model by incorporating noise factors relevant to AI performance. Situation coverage analysis is then applied to identify representative operational contexts. This work marks an initial step toward integrating noise factor analysis and situational coverage to support principled formulation and completeness assessment of robustness requirements for camera-based AI perception.
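The combination of noise factors and operational contexts the paper analyzes can be illustrated as a small coverage computation. The factor names and levels below are invented for illustration, not the paper's actual degradation model:

```python
from itertools import product

# Hypothetical camera noise factors and their discrete levels.
noise_factors = {
    "illumination": ["day", "dusk", "night"],
    "weather": ["clear", "rain", "fog"],
    "lens_state": ["clean", "soiled"],
}

# Candidate situations = Cartesian product of all factor levels.
situations = [dict(zip(noise_factors, combo))
              for combo in product(*noise_factors.values())]

def coverage(tested):
    """Fraction of candidate situations exercised by a test campaign."""
    seen = {tuple(sorted(s.items())) for s in tested}
    universe = {tuple(sorted(s.items())) for s in situations}
    return len(seen & universe) / len(universe)

print(len(situations))           # 18 candidate situations
print(coverage(situations[:9]))  # 0.5 -- half the situations exercised
```

A real situation-coverage analysis would prune infeasible combinations against the Operational Design Domain and weight situations by expected exposure, but the coverage ratio itself is computed in this spirit.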
RAG-based User Profiling for Precision Planning in Mixed-precision Over-the-Air Federated Learning
Yuan, Jinsheng, Tang, Yun, Guo, Weisi
Mixed-precision computing, a widely applied technique in AI, offers a larger trade-off space between accuracy and efficiency. The recently proposed Mixed-Precision Over-the-Air Federated Learning (MP-OTA-FL) enables clients to operate at appropriate precision levels based on their heterogeneous hardware, taking advantage of the larger trade-off space while covering the quantization overheads in the mixed-precision modulation scheme for the OTA aggregation process. A key to further exploring the potential of the MP-OTA-FL framework is the optimization of client precision levels. The choice of precision level hinges on multifaceted factors including hardware capability, potential client contribution, and user satisfaction, some of which are difficult to define or quantify. In this paper, we propose a RAG-based User Profiling framework for precision planning that integrates retrieval-augmented LLMs and dynamic client profiling to optimize satisfaction and contributions. This includes a hybrid interface for gathering device/user insights and a RAG database storing historical quantization decisions with feedback. Experiments show that our method boosts satisfaction, energy savings, and global model accuracy in MP-OTA-FL systems.
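The precision-planning decision can be caricatured as mapping a client profile to one of a few supported bit-widths. The profile fields and the selection heuristic below are illustrative assumptions; the paper's framework instead drives this choice with retrieval-augmented LLMs over a feedback database:

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    max_bits: int        # highest precision the client hardware supports
    contribution: float  # estimated contribution score in [0, 1]
    satisfaction: float  # current user satisfaction in [0, 1]

def plan_precision(p: ClientProfile, levels=(4, 8, 16)) -> int:
    """Pick a precision level from the hardware-feasible subset."""
    feasible = [b for b in levels if b <= p.max_bits]
    # Toy heuristic: high-contribution, low-satisfaction clients get more
    # bits; others run at lower precision to save energy.
    score = 0.7 * p.contribution + 0.3 * (1.0 - p.satisfaction)
    idx = min(int(score * len(feasible)), len(feasible) - 1)
    return feasible[idx]

# A capable but dissatisfied, high-contribution client gets full precision.
print(plan_precision(ClientProfile(max_bits=16, contribution=0.9,
                                   satisfaction=0.2)))  # -> 16
```

The point of the sketch is only the shape of the interface: profile in, precision level out, constrained by hardware feasibility.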
Fundamentals of legislation for autonomous artificial intelligence systems
The article proposes a method for forming a dedicated operational context during the development and implementation of autonomous corporate-management systems, using autonomous systems for a board of directors as an example. A significant part of the operational context for autonomous company-management systems is the regulatory and legal environment within which corporations operate. To create a special operational context for autonomous artificial intelligence systems, local regulatory documents can be drafted in two parallel versions: one for use by people and one for use by autonomous systems. In this case, the artificial intelligence system obtains a well-defined operational context that allows it to perform its functions within the required standards. Local regulations that provide for the specifics of joint work between individuals and autonomous artificial intelligence systems can form the basis of legislation governing the development and implementation of such systems.
Enterprises Challenged By The Many Guises Of AI
Artificial intelligence and machine learning, which found solid footing among the hyperscalers and are now expanding into the HPC community, are at the top of the list of new technologies that enterprises want to embrace, for all kinds of reasons. But it all boils down to the same problem: sorting through the increasing amounts of data coming into their environments and finding patterns that will help them run their businesses more efficiently, make better business decisions, and ultimately make more money. Enterprises are increasingly experimenting with the various frameworks and tools that are on the market and available as open source software, both in small-scale experiments run by a growing number of data scientists who have the expertise to find the valuable information in the growing lakes of data, and in full-blown production deployments that are, conceptually, every bit as sophisticated as what the hyperscalers are deploying. The top cloud service providers and hyperscalers have for several years embraced data-driven AI and machine learning techniques and built their own internal frameworks and platforms that enable them to take advantage of those techniques quickly. But as the technologies begin to cascade into more mainstream enterprises, the complexity of software and systems is throwing roadblocks in front of initiatives aimed at leveraging AI and machine learning for the good of the business.